
    Optimal Sensor Placement in Environmental Research: Designing a Sensor Network under Uncertainty

    One of the main challenges in meteorology and environmental research is that in many important remote areas, sensor coverage is sparse, leaving us with numerous blind spots. Placement and maintenance of sensors in these areas are expensive. It is therefore desirable to find out how, within a given budget, we can design a sensor network that provides the largest amount of useful information while minimizing the size of the "blind spot" areas not covered by the sensors. This problem is very difficult even to formulate in precise terms because of the huge uncertainty. There are two important aspects of this problem: (1) how to best distribute the sensors over the large area, and (2) what is the best location of each sensor in the corresponding zone. There is some research on the first aspect of the problem. In this paper, we illustrate the second aspect of the problem, on the example of optimal selection of locations for the Eddy towers, an important micrometeorological instrument

    Franco-Japanese Research Collaboration on Constraint Programming

    Constraint programming is an emerging technology that allows modeling and solving various problems in many areas such as artificial intelligence, computer programming, computer-aided design, computer graphics, and user interfaces. In this report, we describe recent research collaboration activities on constraint programming conducted by the authors and other researchers in France and Japan. First, we outline our joint research projects on constraint programming, and then present the backgrounds, goals, and approaches of several research topics treated in the projects. Second, we describe the two Franco-Japanese Workshops on Constraint Programming (FJCP), which we organized in Japan in October 2004 and in France in November 2005. We conclude with future prospects for collaboration between French and Japanese researchers in this area

    Interval-type and affine arithmetic-type techniques for handling uncertainty in expert systems

    Expert knowledge consists of statements Sj (facts and rules). The facts and rules are often only true with some probability. For example, if we are interested in oil, we should look at seismic data. If in 90% of the cases the seismic data were indeed helpful in locating oil, then we can say that if we are interested in oil, then with probability 90% it is helpful to look at the seismic data. In more formal terms, we can say that the implication "if oil then seismic" holds with probability 90%. Another example: a bank A trusts a client B, so if we trust the bank A, we should trust B too; if statistically this trust was justified in 99% of the cases, we can conclude that the corresponding implication holds with probability 99%. If a query Q is deducible from the facts and rules, what is the resulting probability p(Q) of Q? We can describe the truth of Q as a propositional formula F in terms of the Sj, i.e., as a combination of statements Sj linked by operators like &, ∨, and ¬; computing p(Q) exactly is NP-hard, so heuristics are needed. Traditionally, expert systems use a technique similar to straightforward interval computations: we parse F and replace each computation step with the corresponding probability operation. Problem: at each step, we ignore the dependence between the intermediate results Fj; hence the resulting intervals are too wide. Example: the estimate for P(A∨¬A) is not 1. Solution: similar to affine arithmetic, besides P(Fj), we also compute P(Fj&Fi) (or P(Fj1&⋯&Fjd)), and on each step, use all combinations of such probabilities to get new estimates. Results: e.g., P(A∨¬A) is estimated as 1
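
    To make the straightforward step-by-step propagation concrete, here is a minimal Python sketch (an illustration, not the authors' code) that propagates probability intervals through ¬, &, and ∨ using Fréchet bounds, which assume nothing about the dependence between operands. It reproduces the problem mentioned above: the tautology A∨¬A is not recognized as certain.

```python
# Minimal sketch: step-by-step propagation of probability intervals
# through a propositional formula, ignoring the dependence between
# intermediate results (Frechet bounds).

def p_not(a):
    lo, hi = a
    return (1.0 - hi, 1.0 - lo)

def p_and(a, b):
    # Frechet bounds for P(A & B) when the dependence is unknown
    return (max(0.0, a[0] + b[0] - 1.0), min(a[1], b[1]))

def p_or(a, b):
    # Frechet bounds for P(A or B) when the dependence is unknown
    return (max(a[0], b[0]), min(1.0, a[1] + b[1]))

A = (0.9, 0.9)               # "if oil then seismic" holds with probability 90%
print(p_or(A, p_not(A)))     # (0.9, 1.0): the tautology is NOT estimated as certain
```

    The affine-arithmetic-style remedy described in the abstract tightens such over-wide estimates by additionally tracking joint probabilities such as P(Fj&Fi).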

    Failure analysis of a complex system based on partial information about subsystems, with potential applications to aircraft maintenance

    In many real-life applications (e.g., in aircraft maintenance), we need to estimate the probability of failure of a complex system (such as an aircraft as a whole or one of its subsystems). Complex systems are usually built with redundancy, allowing them to withstand the failure of a small number of components. In this paper, we assume that we know the structure of the system, and, as a result, for each possible set of failed components, we can tell whether this set will lead to a system failure. For each component A, we know the probability P(A) of its failure with some uncertainty: e.g., we know lower and upper bounds on this probability. Usually, it is assumed that failures of different components are independent events. Our objective is to use all this information to estimate the probability of failure of the entire complex system. In this paper, we describe several methods for solving this problem, including a new efficient method for such estimation based on Cauchy deviates
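
    As a minimal illustration of the setting (not of the Cauchy-deviate method itself), the Python sketch below bounds the failure probability of a hypothetical 2-out-of-3 redundant subsystem; the numbers are made up for the example. Under independence, the failure probability of a coherent (monotone) structure is increasing in each component failure probability, so evaluating it at all lower bounds and at all upper bounds yields the desired interval.

```python
from itertools import product

def system_failure_prob(structure_fails, p):
    """Exact failure probability under independence, by enumerating
    all 2**n component failure patterns (feasible for small subsystems)."""
    total = 0.0
    for pattern in product([False, True], repeat=len(p)):
        if structure_fails(pattern):
            prob = 1.0
            for failed, pi in zip(pattern, p):
                prob *= pi if failed else (1.0 - pi)
            total += prob
    return total

def fails(pattern):
    """Hypothetical 2-out-of-3 subsystem: it fails when >= 2 components fail."""
    return sum(pattern) >= 2

p_lower = [0.01, 0.02, 0.01]   # lower bounds on component failure probabilities
p_upper = [0.03, 0.05, 0.02]   # upper bounds on component failure probabilities

# Failure probability is monotone in each p_i, so the interval endpoints
# are attained at the endpoint vectors.
print(system_failure_prob(fails, p_lower), system_failure_prob(fails, p_upper))
```

    For realistic systems, enumerating all 2^n failure patterns is infeasible, which is where efficient estimation techniques such as the Cauchy-deviate method described in the paper come in.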

    Efficient Geophysical Technique of Vertical Line Elements as a Natural Consequence of General Constraints Techniques

    One of the main objectives of geophysics is to find how the density d and other physical characteristics depend on the 3-D location (x,y,z). In numerical methods, the usual way to find the dependence d(x,y,z) is to discretize the space and to consider as unknowns, e.g., the values of d on a 3-D rectangular grid. In this case, the desired density distribution is represented as a combination of point-wise density distributions. In geophysics, it turns out that a more efficient way to find the desired distribution is to represent it as a combination of thin vertical line elements that start at some depth and go indefinitely down. In this paper, we show that the empirical success of such vertical line element techniques can be naturally explained if, in addition to the equations that relate the observations to the unknown density, we also take into account geophysics-motivated constraints
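
    As an illustration of why such elements are computationally convenient (an assumption on my part; the paper itself argues from general geophysics-motivated constraints), a semi-infinite vertical line mass has a simple closed-form gravity effect: at horizontal distance r from an element of linear density λ starting at depth h and extending downward indefinitely, the vertical attraction is Gλ/√(r²+h²). A forward model built from such elements is then a cheap superposition, as in the Python sketch below.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def g_z_line_element(x_obs, y_obs, x_e, y_e, depth, lam):
    """Vertical gravity (m/s^2) at a surface point due to a thin vertical
    line element of linear density lam (kg/m) that starts at `depth` and
    extends downward indefinitely: g_z = G*lam / sqrt(r**2 + depth**2)."""
    r2 = (x_obs - x_e) ** 2 + (y_obs - y_e) ** 2
    return G * lam / math.sqrt(r2 + depth ** 2)

def g_z_model(x_obs, y_obs, elements):
    """Predicted field is a superposition over all line elements,
    each given as (x_e, y_e, depth, lam)."""
    return sum(g_z_line_element(x_obs, y_obs, *e) for e in elements)

# Hypothetical two-element model
model = [(0.0, 0.0, 100.0, 5.0e6), (500.0, 0.0, 200.0, 2.0e6)]
print(g_z_model(250.0, 0.0, model))
```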

    Model-Order Reduction Using Interval Constraint Solving Techniques

    Many natural phenomena can be modeled as ordinary or partial differential equations. A way to find solutions of such equations is to discretize them and to solve the corresponding (possibly nonlinear) large systems of equations. Solving a large nonlinear system of equations is computationally demanding due to several numerical issues, such as high linear-algebra cost and large memory requirements. Model-Order Reduction (MOR) has been proposed as a way to overcome the issues associated with large dimensions, the most widely used approach being Proper Orthogonal Decomposition (POD). The key idea of POD is to reduce a large number of interdependent variables (snapshots) of the system to a much smaller number of uncorrelated variables while retaining as much as possible of the variation in the original variables. In this work, we show how intervals and constraint solving techniques can be used to compute all the snapshots at once (I-POD). This new process gives us two advantages over the traditional POD method: (1) it handles uncertainty in some parameters or inputs; (2) it reduces the computational cost of obtaining the snapshots
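
    For reference, here is a minimal Python sketch of the classical (non-interval) POD step that I-POD builds on, assuming a snapshot matrix whose columns are solution states: the reduced basis consists of the leading left singular vectors that capture a prescribed fraction of the snapshot energy.

```python
import numpy as np

def pod_basis(snapshots, energy=0.99):
    """Classical POD: columns of `snapshots` are solution states.
    Returns the smallest orthonormal basis capturing `energy`
    fraction of the snapshot variance."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(cum, energy)) + 1
    return U[:, :k]

# Hypothetical use: 1000-dimensional states, 50 snapshots
X = np.random.rand(1000, 50)
Phi = pod_basis(X)
x_reduced = Phi.T @ X[:, 0]    # project a state onto the reduced space
x_approx  = Phi @ x_reduced    # lift it back to the full dimension
```

    The I-POD idea described above replaces the individually computed snapshot columns with enclosures obtained all at once by interval constraint solving; the sketch only shows the standard reduction step.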

    Greedy algorithms for optimizing multivariate Horner schemes

    For univariate polynomials f(x1), Horner's scheme provides the fastest way to compute their values. For multivariate polynomials, several different versions of Horner's scheme are possible, and it is not clear which of them is optimal. In this paper, we propose a greedy algorithm which, it is hoped, will lead to good computation times. The univariate Horner scheme has another advantage: if the value x1 is known with uncertainty, and we are interested in the resulting uncertainty in f(x1), then the Horner scheme leads to a better estimate for this uncertainty than many other ways of computing f(x1). The second greedy algorithm that we propose tries to find the multivariate Horner scheme that leads to the best estimate for the uncertainty in f(x1,..., xn).
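
    To make the idea concrete, here is a small Python sketch (an illustration, not the authors' implementation) of univariate Horner evaluation together with one plausible greedy rule for the multivariate case: repeatedly factor out the variable that occurs in the largest number of remaining monomials.

```python
def horner_univariate(coeffs, x):
    """Evaluate c[0] + c[1]*x + ... + c[n]*x**n by Horner's scheme."""
    result = 0.0
    for c in reversed(coeffs):
        result = result * x + c
    return result

def greedy_horner(poly, point):
    """Evaluate a multivariate polynomial given as {exponent_tuple: coeff}.
    Greedy rule (one plausible heuristic): factor out the variable that
    appears in the most remaining monomials."""
    if not poly:
        return 0.0
    n = len(point)
    counts = [sum(1 for e in poly if e[i] > 0) for i in range(n)]
    i = max(range(n), key=lambda j: counts[j])
    if counts[i] == 0:                       # only the constant monomial is left
        return poly.get(tuple([0] * n), 0.0)
    divisible, rest = {}, {}
    for e, c in poly.items():
        if e[i] > 0:
            e2 = list(e)
            e2[i] -= 1
            divisible[tuple(e2)] = c
        else:
            rest[e] = c
    return point[i] * greedy_horner(divisible, point) + greedy_horner(rest, point)

# Hypothetical example: f(x, y) = 3 + 2*x*y + x**2
f = {(0, 0): 3.0, (1, 1): 2.0, (2, 0): 1.0}
print(greedy_horner(f, (2.0, 5.0)))   # 3 + 2*2*5 + 4 = 27
```

    A different selection criterion, aimed instead at minimizing the resulting uncertainty estimate, would fit the same recursive skeleton.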